EEG signals and task-independent person-specific signatures


Recent studies have shown growing interest in using brain signals collected through electroencephalography (EEG) as a reliable biometric. However, owing to limited data, the field has focused on classical techniques applied to different elicitation protocols. The rationale for studying different elicitation protocols is that individuals produce a distinctive signature for a given task, which can be leveraged to identify them. We conjecture that biometric signatures should be present in the EEG signal irrespective of the task or the state of the brain.

Both speech and brain signals are temporal data that can be described as sequences of observations, and analyzing these signals to extract the required information has always been challenging. In this talk, we draw parallels between biometric recognition from speech and from brain signals. For decades, owing to the enormous amount of available data, many machine learning and statistical techniques have been proposed to extract biometric information from a speech signal irrespective of the spoken content. Using the universal background model-Gaussian mixture model (UBM-GMM), a text-independent speaker verification technique, we first verify the conjecture that biometric signatures are present in the EEG signal independent of the task. We then extend and modify current state-of-the-art subspace-based speaker verification techniques to reliably identify individuals from EEG signals irrespective of the task.
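To make the UBM-GMM analogy concrete, the sketch below shows the core of text-independent verification as it could be transplanted to EEG feature sequences: fit a background GMM on pooled data, MAP-adapt only its component means to one subject's enrollment features, and score a test segment by the average log-likelihood ratio between the adapted model and the UBM. This is a minimal illustration with synthetic data, not the authors' pipeline; the feature extraction (e.g. per-channel spectral features over windows) and the relevance factor are assumptions.

```python
# Minimal GMM-UBM verification sketch applied to EEG-style feature vectors.
# Synthetic data stands in for real EEG features extracted upstream.
import copy
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

def map_adapt_means(ubm, X, relevance=16.0):
    """MAP-adapt only the UBM component means to enrollment features X."""
    resp = ubm.predict_proba(X)                 # (T, K) component posteriors
    n_k = resp.sum(axis=0)                      # soft counts per component
    ex = resp.T @ X / np.maximum(n_k, 1e-10)[:, None]  # first-order statistics
    alpha = (n_k / (n_k + relevance))[:, None]  # data-dependent adaptation weight
    model = copy.deepcopy(ubm)
    model.means_ = alpha * ex + (1.0 - alpha) * ubm.means_
    return model

def llr_score(model, ubm, X):
    """Average per-frame log-likelihood ratio of target model vs. UBM."""
    return float(np.mean(model.score_samples(X) - ubm.score_samples(X)))

# Background pool (many subjects), plus enrollment/test data for one subject
# and an impostor segment; distributions are synthetic placeholders.
background = rng.normal(size=(2000, 8))
subject = rng.normal(loc=0.5, size=(300, 8))
enroll, test = subject[:200], subject[200:]
impostor = rng.normal(loc=-0.5, size=(100, 8))

ubm = GaussianMixture(n_components=4, covariance_type="diag",
                      random_state=0).fit(background)
target = map_adapt_means(ubm, enroll)

# A genuine trial should score higher than an impostor trial.
genuine, imposter_score = llr_score(target, ubm, test), llr_score(target, ubm, impostor)
print(genuine > imposter_score)
```

Only the means are adapted here, following the standard relevance-MAP recipe; with diagonal covariances left untouched, the copied model's precision terms remain valid, so scoring works directly after replacing `means_`.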